Current Issue : July - September, Volume : 2019, Issue Number : 3, Articles : 5
Monocytes/macrophages have begun to emerge as key cellular modulators of brain homeostasis and central nervous system (CNS) disease. In the healthy brain, resident microglia are the predominant macrophage cell population; however, under conditions of blood-brain barrier leakage, peripheral monocytes/macrophages can infiltrate the brain and participate in CNS disease pathogenesis. Distinguishing these two populations is often challenging, owing to a paucity of universally accepted and reliable markers. To identify discriminatory marker sets for microglia and peripheral monocytes/macrophages, we employed a large meta-analytic approach using five published murine transcriptional datasets. Following hierarchical clustering, we filtered the top differentially expressed genes (DEGs) through a brain cell type-specific sequencing database, which led to the identification of eight microglia and eight peripheral monocyte/macrophage markers. We then validated their differential expression, leveraging a published single-cell RNA sequencing dataset and quantitative RT-PCR using freshly isolated microglia and peripheral monocytes/macrophages from two different mouse strains. We further verified the translation of these DEGs at the protein level. As top microglia DEGs, we identified P2ry12, Tmem119, Slc2a5 and Fcrls, whereas Emilin2, Gda, Hp and Sell emerged as the best DEGs for identifying peripheral monocytes/macrophages. Lastly, we evaluated their utility in discriminating monocyte/macrophage populations in the setting of brain pathology (glioma), and found that these DEG sets distinguished glioma-associated microglia from macrophages in both RCAS and GL261 mouse models of glioblastoma. Taken together, this unbiased bioinformatic approach facilitated the discovery of a robust set of microglia and peripheral monocyte/macrophage expression markers to discriminate these monocyte populations in both health and disease.
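The core of the marker-discovery strategy described above — ranking genes by differential expression between two cell populations — can be sketched as follows. This is a minimal illustration, not the study's actual pipeline; the expression values are invented toy numbers, and only the gene names echo the abstract.

```python
import numpy as np

def rank_degs(expr, genes, group_a_cols, group_b_cols, top_n=3):
    """Rank genes by absolute log2 fold change between two cell populations.

    expr: genes x samples matrix of expression values.
    Returns the top_n gene names with the largest |log2 fold change|
    (a pseudocount of 1 avoids division by zero).
    """
    a_mean = expr[:, group_a_cols].mean(axis=1)
    b_mean = expr[:, group_b_cols].mean(axis=1)
    log2fc = np.log2(a_mean + 1) - np.log2(b_mean + 1)
    order = np.argsort(-np.abs(log2fc))
    return [genes[i] for i in order[:top_n]]

# Toy matrix: 4 genes x 4 samples (columns 0-1 "microglia", 2-3 "macrophage").
genes = ["P2ry12", "Tmem119", "Hp", "Actb"]
expr = np.array([
    [900., 950.,  10.,  12.],   # microglia-enriched
    [700., 720.,   5.,   8.],   # microglia-enriched
    [  6.,   9., 400., 450.],   # macrophage-enriched
    [500., 510., 490., 505.],   # housekeeping gene, no change
])
top = rank_degs(expr, genes, group_a_cols=[0, 1], group_b_cols=[2, 3], top_n=3)
```

In this toy setting the unchanged housekeeping gene drops out of the top ranks, while the population-enriched genes rise to the top — the same filtering logic the meta-analysis applies at scale.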
Background: The investigation of intracellular metabolism is a mainstay of biotechnology and physiology research. Intracellular metabolic rates are commonly evaluated using the labeling patterns of metabolites identified in stable isotope labeling experiments. The labeling pattern, or mass distribution vector, describes the fractional abundances of all isotopologs with different masses resulting from isotopic labeling, and is typically resolved using mass spectrometry. Because naturally occurring isotopes and isotopic impurity also contribute to the measured signals, the measured patterns must be corrected to obtain the true labeling patterns. Since contaminant isotopologs with the same nominal mass can be resolved using modern mass spectrometers with high mass resolution, the correction process should be resolution-dependent.
Results: Here we present a software tool, ElemCor, to perform correction of such data in a resolution-dependent manner. The tool is based on mass difference theory (MDT) and information from unlabeled samples (ULS) to account for resolution effects. MDT is a mathematical theory and requires only chemical formulae to perform correction. ULS is semi-empirical and requires additional measurement of isotopologs from unlabeled samples. We validate both methods and show their improvement in accuracy and comprehensiveness over existing methods using simulated data and experimental data from Saccharomyces cerevisiae. The tool is available at https://github.com/4dsoftware/elemcor.
Conclusions: We present a software tool based on two methods, MDT and ULS, to correct LC-MS data from isotopic labeling experiments for natural abundance and isotopic impurity. We recommend MDT for low-mass compounds, for cost efficiency in experiments, and ULS for high-mass compounds with relatively large spectral inaccuracy that can be tracked by unlabeled standards.
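The correction the abstract describes — removing natural-isotope contributions from a measured mass distribution vector (MDV) — is classically done by inverting a binomial correction matrix. The sketch below is that textbook carbon-only approach, not ElemCor's resolution-dependent algorithm; it ignores other elements, isotopic impurity, and resolution effects, which are exactly what ElemCor adds.

```python
import numpy as np
from math import comb

P13C = 0.0107  # natural abundance of carbon-13

def correction_matrix(n_carbons):
    """C[i, j] = probability that a species with j labeled carbons is
    measured at mass shift i, due to natural 13C occurring in the
    n - j unlabeled positions (carbon-only binomial model)."""
    n = n_carbons
    C = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        for k in range(n - j + 1):
            C[j + k, j] = comb(n - j, k) * P13C**k * (1 - P13C)**(n - j - k)
    return C

def correct_mdv(measured, n_carbons):
    """Solve C @ true = measured for the true labeling pattern,
    then clip negatives and renormalize to fractional abundances."""
    C = correction_matrix(n_carbons)
    true = np.linalg.solve(C, np.asarray(measured, dtype=float))
    true = np.clip(true, 0, None)
    return true / true.sum()

# Sanity check: an unlabeled 3-carbon compound. Its measured MDV is
# exactly the natural-abundance binomial, so correction should recover
# pure M+0.
n = 3
measured = [comb(n, i) * P13C**i * (1 - P13C)**(n - i) for i in range(n + 1)]
corrected = correct_mdv(measured, n)
```

Because the correction matrix is lower triangular with nonzero diagonal, the linear system is always solvable; real tools differ mainly in how they build the matrix (elements beyond carbon, tracer purity, and resolution-dependent merging of isotopologs).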
Background: Long non-coding RNAs (lncRNAs) play an important role in human complex diseases. Identifying lncRNA-disease associations yields insight into disease-related lncRNAs and benefits disease diagnosis and treatment. However, exploring lncRNA-disease associations experimentally is expensive and time-consuming.
Results: In this study, we developed a novel method to identify potential lncRNA-disease associations by Integrating Diverse Heterogeneous Information sources with positive pointwise Mutual Information and the Random Walk with restart algorithm (IDHI-MIRW). IDHI-MIRW first constructs multiple lncRNA similarity networks and disease similarity networks from diverse lncRNA-related and disease-related datasets, then runs the random walk with restart algorithm on these similarity networks to extract topological similarities, which are fused with positive pointwise mutual information to build a large-scale lncRNA-disease heterogeneous network. Finally, IDHI-MIRW runs the random walk with restart algorithm on the lncRNA-disease heterogeneous network to infer potential lncRNA-disease associations.
Conclusions: Compared with other state-of-the-art methods, IDHI-MIRW achieves the best prediction performance. In case studies of breast cancer, stomach cancer, and colorectal cancer, 36/45 (80%) of the novel lncRNA-disease associations predicted by IDHI-MIRW are supported by the recent literature. Furthermore, we found that lncRNA LINC01816 is associated with the survival of colorectal cancer patients. IDHI-MIRW is freely available at https://github.com/NWPU-903PR/IDHI-MIRW.
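Random walk with restart, the algorithmic core of IDHI-MIRW, iterates p ← (1 − r)·W·p + r·p0 on a column-normalized adjacency matrix until convergence; the stationary probabilities rank every node by network proximity to the seed. A generic sketch on a toy graph (the network and restart rate here are illustrative, not the paper's heterogeneous network):

```python
import numpy as np

def rwr(adj, seed, restart=0.7, tol=1e-10, max_iter=1000):
    """Random walk with restart on a graph given by adjacency matrix adj.

    seed: index of the start node. Returns the stationary visiting
    probabilities, which rank all nodes by proximity to the seed.
    """
    # Column-normalize so each column is a transition distribution.
    col_sums = adj.sum(axis=0)
    W = adj / np.where(col_sums == 0, 1, col_sums)
    p0 = np.zeros(adj.shape[0])
    p0[seed] = 1.0
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p

# Toy 4-node chain 0 - 1 - 2 - 3; the walk restarts at node 0.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
p = rwr(adj, seed=0)
```

On the chain, probability mass decays with distance from the seed, which is exactly the "topological similarity" signal the method extracts from each similarity network before fusion.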
Background: Next-Generation Sequencing (NGS) is now widely used in biomedical research for various applications. Processing NGS data requires multiple programs and customization of the processing pipelines according to the data platforms. However, the rapid progress of NGS applications and processing methods urgently requires prompt updating of the pipelines. Recent clinical applications of NGS technology, such as cell-free DNA, cancer panel, or exosomal RNA sequencing data, also require appropriate customization of the processing pipelines. Here, we developed SEQprocess, a highly extendable framework that can provide standard as well as customized pipelines for NGS data processing.
Results: SEQprocess was implemented as an R package with fully modularized data-processing steps that can be easily customized. Currently, six pre-customized pipelines are provided that can be easily executed by non-experts such as biomedical scientists, including the National Cancer Institute's (NCI) Genomic Data Commons (GDC) pipelines as well as popularly used pipelines for variant calling (e.g., GATK) and estimation of allele frequency, RNA abundance (e.g., TopHat2/Cufflinks), or DNA copy number (e.g., Sequenza). In addition, optimized pipelines for clinical sequencing from cell-free DNA or miR-Seq are also provided. The processed data are transformed into the R-compatible data types 'ExpressionSet' or 'SummarizedExperiment', which facilitates subsequent data analysis within the R environment. Finally, an automated report summarizing the processing steps is also provided to ensure reproducibility of the NGS data analysis.
Conclusion: SEQprocess provides a highly extendable and R-compatible framework that can manage customized and reproducible pipelines for handling multiple legacy NGS processing tools.
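SEQprocess itself is an R package, but the design the abstract describes — fully modularized, swappable processing steps plus an automated report for reproducibility — is language-agnostic. The following Python sketch is an analogy for that pattern, not SEQprocess's API; the step names and toy "reads" are invented.

```python
from datetime import datetime, timezone

class Pipeline:
    """A chain of named processing steps with an automatic run report,
    mimicking the modular-steps-plus-report pattern described above."""
    def __init__(self):
        self.steps = []   # list of (name, function) pairs
        self.report = []  # human-readable log of what ran, in order

    def add_step(self, name, fn):
        self.steps.append((name, fn))
        return self  # returning self allows fluent chaining

    def run(self, data):
        for name, fn in self.steps:
            data = fn(data)
            stamp = datetime.now(timezone.utc).isoformat()
            self.report.append(f"{stamp}  {name}: ok")
        return data

# Hypothetical mini-pipeline on a list of "reads" (plain strings):
# trim trailing poly-A, then drop reads that became too short.
pipe = (Pipeline()
        .add_step("trim_polyA", lambda reads: [r.rstrip("A") for r in reads])
        .add_step("filter_short", lambda reads: [r for r in reads if len(r) >= 4]))
result = pipe.run(["GATTACAAA", "CGA", "TTGGCC"])
```

Because every step is just a named function, swapping one tool for another (as SEQprocess allows for its pipeline stages) means replacing a single entry in the chain, and the report still records exactly what ran.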
Background: High-throughput amplicon sequencing of environmental DNA (eDNA metabarcoding) has become a routine tool for biodiversity surveys and ecological studies. By including sample-specific tags in the primers prior to PCR amplification, it is possible to multiplex hundreds of samples in a single sequencing run. The analysis of millions of sequences spread across hundreds to thousands of samples calls for efficient, automated, yet flexible analysis pipelines. Various algorithms and software tools have been developed to perform one or multiple processing steps, such as paired-end read assembly, chimera filtering, Operational Taxonomic Unit (OTU) clustering, and taxonomic assignment. Some of these tools are now well established and widely used by scientists as part of their workflow. Wrappers capable of processing metabarcoding data from raw sequencing reads to an annotated OTU-to-sample matrix have also been developed to facilitate the analysis for non-specialist users. Yet most of them require basic bioinformatics or command-line knowledge, which can limit accessibility to such integrative toolkits. Furthermore, for flexibility reasons, these tools have adopted a step-by-step approach, which can prevent easy automation of the workflow and hence hamper the reproducibility of the analysis.
Results: We introduce SLIM, an open-source web application that simplifies the creation and execution of metabarcoding data-processing pipelines through an intuitive Graphical User Interface (GUI). The GUI interacts with well-established software and their associated parameters, so that the processing steps are performed seamlessly from the raw sequencing data to an annotated OTU-to-sample matrix. Thanks to a module-centered organization, SLIM can be used for a wide range of metabarcoding cases and can also be extended by developers for custom needs or for the integration of new software. The pipeline configuration (i.e., the module chaining and all their parameters) is stored in a file that can be used to reproduce the same analysis.
Conclusion: This web application has been designed to be user-friendly for non-specialists, yet flexible, with advanced settings and extensibility for advanced users and bioinformaticians. The source code along with full documentation is available in the GitHub repository (https://github.com/yoann-dufresne/SLIM), and a demonstration server is accessible through the application website (https://trtcrd.github.io/SLIM/).
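Storing the module chaining and all parameters in a single file, as SLIM does for reproducibility, can be sketched as a JSON document plus a loader. The module names and parameters below are hypothetical placeholders, not SLIM's real configuration schema; they only mirror the processing steps named in the abstract.

```python
import json

# Hypothetical pipeline description: module names and parameters are
# illustrative, not SLIM's actual configuration format.
CONFIG_TEXT = """
{
  "pipeline": [
    {"module": "paired_end_assembly",   "params": {"min_overlap": 10}},
    {"module": "chimera_filter",        "params": {"method": "denovo"}},
    {"module": "otu_clustering",        "params": {"identity": 0.97}},
    {"module": "taxonomic_assignment",  "params": {"db": "reference.fasta"}}
  ]
}
"""

def load_pipeline(text):
    """Parse a stored configuration back into an ordered module chain,
    so the exact same analysis can be re-run later."""
    cfg = json.loads(text)
    return [(step["module"], step["params"]) for step in cfg["pipeline"]]

chain = load_pipeline(CONFIG_TEXT)
```

Keeping the entire chain and every parameter in one declarative file is what makes the analysis reproducible: re-running it requires only the file, not a memory of which buttons were clicked.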